Learning Crowd Behaviour with Neuroevolution (Master's thesis)

Author: Pascal

Abstract
Many different techniques are used to mimic human behaviour in order to create realistic crowd simulations. Agent-based approaches, while offering the greatest potential for realism, have traditionally required carefully hand-crafted rules. In recent years the focus has shifted from hand-crafting decision rules to learning them through methods such as reinforcement learning. This work takes a closer look at the suitability of a prominent neuroevolution method, NeuroEvolution of Augmenting Topologies (NEAT). Agents are controlled by an artificial neural network that is evolved over generations in typical crowd-simulation scenarios. The evolved control logic is then replicated across many agents and the emergent crowd behaviour is evaluated empirically.
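To make the setup concrete, the following is a minimal sketch of such an evolution loop under simplifying assumptions: a fixed-topology feed-forward network (so no NEAT topology mutation or speciation) and a toy navigation scenario in which one agent must reach a goal while avoiding a single neighbour. All constants, sensor choices, and fitness terms are illustrative, not the thesis's actual configuration.

```python
# Minimal sketch of weight-only neuroevolution for a crowd-navigation agent.
# This is NOT full NEAT (no topology mutation or speciation); it evolves the
# weights of a small fixed-topology network with a simple elitist strategy.
# All scenario details (sensors, fitness terms, constants) are illustrative.

import math
import random

N_IN, N_HID, N_OUT = 4, 6, 2          # inputs: goal dx, dy; nearest-neighbour dx, dy
GENOME_LEN = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT

def decode(genome):
    """Split a flat weight vector into layer matrices and biases."""
    i = 0
    w1 = [genome[i + r * N_IN:i + (r + 1) * N_IN] for r in range(N_HID)]; i += N_IN * N_HID
    b1 = genome[i:i + N_HID]; i += N_HID
    w2 = [genome[i + r * N_HID:i + (r + 1) * N_HID] for r in range(N_OUT)]; i += N_HID * N_OUT
    b2 = genome[i:i + N_OUT]
    return w1, b1, w2, b2

def activate(genome, inputs):
    """Feed-forward pass: tanh hidden layer, tanh outputs in [-1, 1] (a steering vector)."""
    w1, b1, w2, b2 = decode(genome)
    hidden = [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b) for row, b in zip(w1, b1)]
    return [math.tanh(sum(w * h for w, h in zip(row, hidden)) + b) for row, b in zip(w2, b2)]

def evaluate(genome, steps=60):
    """Toy scenario: walk an agent toward a goal while a static 'neighbour' blocks the way."""
    ax, ay, gx, gy, ox, oy = 0.0, 0.0, 10.0, 0.0, 5.0, 0.0
    fitness = 0.0
    for _ in range(steps):
        inputs = [gx - ax, gy - ay, ox - ax, oy - ay]
        vx, vy = activate(genome, inputs)
        ax, ay = ax + 0.2 * vx, ay + 0.2 * vy
        if math.hypot(ox - ax, oy - ay) < 0.5:      # collision penalty
            fitness -= 1.0
    return fitness - math.hypot(gx - ax, gy - ay)   # reward ending near the goal

def evolve(pop_size=40, generations=60, elite=8, sigma=0.3):
    """Keep the best genomes and fill the population with mutated copies of them."""
    pop = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=evaluate, reverse=True)
        parents = pop[:elite]
        pop = parents + [
            [w + random.gauss(0, sigma) for w in random.choice(parents)]
            for _ in range(pop_size - elite)
        ]
    return max(pop, key=evaluate)

if __name__ == "__main__":
    best = evolve()
    print("best fitness:", round(evaluate(best), 2))
```

In the thesis setting, the evolved genome would then be copied to many agents and the resulting crowd behaviour assessed; a full NEAT implementation (for example the neat-python package) additionally mutates network topology and protects structural innovations through speciation.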
Similar resources
Towards learning movement in dense crowds for a socially-aware mobile robot
Robots moving in a crowd occasionally reach situations where they need to decide whether or not to give way to a human, a situation we call a micro-conflict and model as a two-player game. We collect data from a robot controlled by a human operator and use three different supervised learning algorithms (random forest, SVM and neuroevolution) to create a decision-maker module which imitates th...
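As a rough illustration of the imitation-learning step sketched in that abstract, here is a hedged example of training a give-way classifier on logged operator decisions with scikit-learn's RandomForestClassifier; the features, labelling rule, and data are invented for illustration and are not taken from the cited work.

```python
# Illustrative sketch: learn to imitate an operator's give-way decisions in
# micro-conflicts. Feature choices and the toy labelling rule are assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical features: [distance to human (m), relative speed (m/s),
# angle to human (rad), local crowd density (people / m^2)]
rng = np.random.default_rng(0)
X = rng.uniform([0.2, -1.5, -np.pi, 0.0], [5.0, 1.5, np.pi, 3.0], size=(500, 4))

# Toy rule standing in for the human operator's logged decisions:
# give way (label 1) when the human is close and the area is crowded.
y = ((X[:, 0] < 1.5) & (X[:, 3] > 1.0)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```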
Intelligent Embodied Agents within a Physically Accurate Environment
Evolving Intelligent Embodied Agents within a Physically Accurate Environment, by Gene D. Ruebsamen, December 2002. This thesis explores the application of evolutionary reinforcement learning techniques for evolving behaviors in embodied agents that exist within a realistic virtual environment and are subject to the constraints defined by the Newtonian model of physics. Evolutionary reinforce...
Learning Othello using Cooperative and Competitive Neuroevolution
From the early days of computing, making computers play games like chess and Othello with a high level of skill has been a challenging and, lately, rewarding task. As computing power increases, more and more complex learning techniques are employed to allow computers to learn different tasks. Games, however, remain a challenging and exciting domain for testing new technique...
Solving Multiple Isolated, Interleaved, and Blended Tasks through Modular Neuroevolution
Many challenging sequential decision-making problems require agents to master multiple tasks. For instance, game agents may need to gather resources, attack opponents, and defend against attacks. Learning algorithms can thus benefit from having separate policies for these tasks, and from knowing when each one is appropriate. How well this approach works depends on how tightly coupled the tasks ...
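A minimal sketch of the "separate policies plus arbitration" idea mentioned in that abstract, assuming hand-written policy modules and a hand-written arbitrator; in modular neuroevolution both the modules and the decision of when to use each would be learned, and all state fields, actions, and thresholds here are illustrative.

```python
# Illustrative sketch: one policy module per task (gather, attack, defend)
# plus an arbitrator that decides which module controls the agent each tick.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class State:
    health: float      # 0..1, illustrative
    enemy_near: bool

def gather(state: State) -> str:
    return "move_to_nearest_resource"

def attack(state: State) -> str:
    return "fire_at_closest_enemy"

def defend(state: State) -> str:
    return "retreat_to_base"

POLICIES: Dict[str, Callable[[State], str]] = {
    "gather": gather, "attack": attack, "defend": defend,
}

def arbitrate(state: State) -> str:
    """Pick which policy module is appropriate for the current state."""
    if state.enemy_near and state.health < 0.3:
        return "defend"
    if state.enemy_near:
        return "attack"
    return "gather"

def act(state: State) -> str:
    return POLICIES[arbitrate(state)](state)

print(act(State(health=0.2, enemy_near=True)))  # -> retreat_to_base
```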
Neuroevolution
Neuroevolution is a method for modifying neural network weights, topologies, or ensembles in order to learn a specific task. Evolutionary computation is used to search for network parameters that maximize a fitness function that measures performance in the task. Compared to other neural network learning methods, neuroevolution is highly general, allowing learning without explicit targets, with ...